Workshop on Computational Models in Social Epistemology
Bochum, Dec 6-8, 2023
https://github.com/debatelab/genai-epistemology
📄 Wang et al. (2023)
📄 Betz (2022)
📄 Du et al. (2023)
Prompt: “These are the solutions to the problem from other agents: <other agent responses> Using the reasoning from other agents as additional advice, can you give an updated answer? Examine your solution and that other agents. Put your answer in the form (X) at the end of your response.”
📄 Du et al. (2023)
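The debate protocol behind this prompt can be sketched as a simple orchestration loop. This is a simplified reading of Du et al. (2023), not their implementation; `ask_model` is a hypothetical stand-in for a real LLM API call, passed in as a parameter:

```python
# One round of multi-agent debate: each agent sees the other agents'
# current answers, embedded in the debate prompt, and revises its own.

DEBATE_PROMPT = (
    "These are the solutions to the problem from other agents: {others} "
    "Using the reasoning from other agents as additional advice, can you "
    "give an updated answer? Examine your solution and that other agents. "
    "Put your answer in the form (X) at the end of your response."
)

def debate_round(ask_model, question, answers):
    """Run one debate round; `ask_model` maps a prompt string to a reply."""
    updated = []
    for i in range(len(answers)):
        # collect all answers except agent i's own
        others = "\n".join(a for j, a in enumerate(answers) if j != i)
        prompt = question + "\n" + DEBATE_PROMPT.format(others=others)
        updated.append(ask_model(prompt))
    return updated
```

Iterating `debate_round` until the answers stabilize yields the multi-round debate dynamics.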
import numpy as np

class AbstractBCAgent():

    def update(self, community):
        opinions = [peer.opinion for peer in self.peers(community)]
        self.opinion = self.revise(opinions)

    def peers(self, community):
        # epsilon: bounded-confidence threshold (global model parameter)
        peers = [
            agent for agent in community
            if self.distance(agent.opinion) <= epsilon
        ]
        return peers

    def distance(self, opinion):
        pass

    def revise(self, opinions):
        pass

class NumericalBCAgent(AbstractBCAgent):

    def distance(self, opinion):
        """calculates distance between agent's and other opinion"""
        return abs(opinion - self.opinion)

    def revise(self, opinions):
        """revision through weighted opinion averaging"""
        alpha = self._parameters.get("alpha", .5)
        revision = alpha * self.opinion + (1 - alpha) * np.mean(opinions)
        return revision

class NaturalLanguageBCAgent(AbstractBCAgent):

    def distance(self, other):
        """distance as expected agreement level"""
        # agreement_lmq: LMQL query returning a distribution over
        # five agreement labels (indices 0-4)
        lmql_result = agreement_lmq(
            self.opinion, other, **kwargs
        )
        probs = lmql_result.variables.get("P(LABEL)")
        # expected label index, normalized to [0, 1]
        return sum([i * v for i, (_, v) in enumerate(probs)]) / 4.0

    def revise(self, peer_opinions):
        """natural language opinion revision"""
        # revise_lmq: LMQL query producing a revised natural-language opinion
        revision = revise_lmq(
            self.opinion, peer_opinions, **kwargs
        )
        return revision

alpha=“very high”; epsilon=0.4/0.5; topic=“veganism”
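To see the numerical variant in action, here is a self-contained bounded-confidence run. It follows the weighted-averaging revision rule above; storing `alpha` and `epsilon` on each agent (rather than in a shared `_parameters` dict) is a simplifying assumption:

```python
import numpy as np

class NumericalBCAgent:
    """Minimal numerical bounded-confidence agent (Hegselmann-Krause style)."""

    def __init__(self, opinion, alpha=0.5, epsilon=0.2):
        self.opinion = opinion
        self.alpha = alpha        # weight on own opinion
        self.epsilon = epsilon    # bounded-confidence threshold

    def peers(self, community):
        # peers: agents whose opinion lies within the confidence interval
        return [a for a in community
                if abs(a.opinion - self.opinion) <= self.epsilon]

    def update(self, community):
        opinions = [peer.opinion for peer in self.peers(community)]
        # weighted averaging between own opinion and the peer mean
        self.opinion = (self.alpha * self.opinion
                        + (1 - self.alpha) * np.mean(opinions))

# 20 agents with evenly spread initial opinions in [0, 1]
initial = np.linspace(0.0, 1.0, 20)
agents = [NumericalBCAgent(o) for o in initial]

for _ in range(50):
    # synchronous update: everyone revises against a snapshot of opinions
    snapshot = [NumericalBCAgent(a.opinion) for a in agents]
    for a in agents:
        a.update(snapshot)

final = np.array([a.opinion for a in agents])
```

With this `epsilon` the opinions contract into a few clusters; raising the threshold drives the population toward consensus.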
🤔 Are LLMs suited for building epistemic agents?
📄 Pan et al. (2023)
📄 Morris et al. (2023)
📄 AI4Science and Quantum (2023)
📄 Betz and Richardson (2023)
But humans’ cognitive architecture is fundamentally different from LLMs’, or is it?
📄 Goldstein et al. (2020)
📄 The neural architecture of language: Integrative modeling converges on predictive processing. (Schrimpf et al. 2021)
TLDR It is found that the most powerful “transformer” models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities […].
📄 Brains and algorithms partially converge in natural language processing. (Caucheteux and King 2022)
TLDR This study shows that modern language algorithms partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.
📄 Mapping Brains with Language Models: A Survey. (Karamolegkou, Abdou, and Søgaard 2023)
ABSTRACT […] We also find that the accumulated evidence, for now, remains ambiguous, but correlations with model size and quality provide grounds for cautious optimism.
📄 Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training. (Hosseini et al. 2022)
TLDR [A] developmentally realistic amount of training may suffice and […] models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations of sentences that are predictive of human fMRI responses.
Are LLMs suited for building epistemic agents?
Come, join the party! 🎉
Vanishing distinctions (due to AGI):
Epistemic redundancy (due to AGI) brings profound philosophical challenges: